Interpretable heartbeat classification using local model-agnostic explanations on ECGs
Authors
Abstract
Treatment and prevention of cardiovascular diseases often rely on Electrocardiogram (ECG) interpretation. Dependent on the physician's variability, ECG interpretation is subjective and prone to errors. Machine learning models are often developed and used to support doctors; however, their lack of interpretability stands as one of the main drawbacks of their widespread operation. This paper focuses on an Explainable Artificial Intelligence (XAI) solution to make heartbeat classification more explainable using several state-of-the-art model-agnostic methods. We introduce a high-level conceptual framework for explainable time series and propose an original method that adds temporal dependency between samples using the series' derivative. The results were validated on the MIT-BIH arrhythmia dataset: we performed a performance analysis to evaluate whether the explanations fit the model's behaviour, and employed the 1-D Jaccard's index to compare the subsequences extracted from an interpretable model with those highlighted by the XAI methods used. Our results show that the use of the raw signal and its derivative includes temporal dependency between samples, which promotes the explanation of the classification. A small but informative user study concludes this study, indicating the potential of the visual explanations produced by our method being adopted in real-world clinical settings, either as diagnostic aids or as a training resource.
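As a concrete illustration of two ingredients mentioned in the abstract, the minimal Python sketch below shows (a) stacking a heartbeat segment with its first-order derivative, giving an explainer an explicit channel that encodes the dependency between consecutive samples, and (b) a 1-D Jaccard index comparing the relevant subsequences marked by an interpretable model and by an XAI method. The function names, the toy segment, and the relevance masks are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def with_derivative(segment: np.ndarray) -> np.ndarray:
    """Stack a 1-D ECG segment with its first-order derivative,
    producing a (2, n) array whose second channel encodes the
    temporal dependency between consecutive samples."""
    return np.stack([segment, np.gradient(segment)])

def jaccard_1d(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """1-D Jaccard index between two boolean masks that mark the
    relevant samples (subsequences) of the same heartbeat."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / union if union else 1.0

# Toy example: a 10-sample segment and two relevance masks,
# one from an interpretable model and one from an XAI method.
segment = np.sin(np.linspace(0.0, np.pi, 10))
features = with_derivative(segment)            # shape (2, 10): raw + derivative
mask_interpretable = np.array([0, 0, 1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
mask_xai = np.array([0, 0, 0, 1, 1, 1, 1, 0, 0, 0], dtype=bool)
print(jaccard_1d(mask_interpretable, mask_xai))  # -> 0.6
```

A Jaccard index close to 1 would indicate that the XAI method highlights essentially the same subsequence as the interpretable reference; values near 0 indicate disagreement.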
Similar Resources
Local Interpretable Model-Agnostic Explanations for Music Content Analysis
The interpretability of a machine learning model is essential for gaining insight into model behaviour. While some machine learning models (e.g., decision trees) are transparent, the majority of models used today are still black-boxes. Recent work in machine learning aims to analyse these models by explaining the basis of their decisions. In this work, we extend one such technique, called local...
MAGIX: Model Agnostic Globally Interpretable Explanations
Explaining the behavior of a black box machine learning model at the instance level is useful for building trust. However, what is also important is understanding how the model behaves globally. Such an understanding provides insight into both the data on which the model was trained and the generalization power of the rules it learned. We present here an approach that learns rules to explain gl...
Anchors: High-Precision Model-Agnostic Explanations
We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, “sufficient” conditions for predictions. We propose an algorithm to efficiently compute these explanations for any black-box model with high-probability guarantees. We demonstrate the flexibility of anchors by explaining a myriad of different mode...
Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance
At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model’s behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model’s behavior, precision to how accurate humans are in those predictions, and e...
Interpretable and Informative Explanations of Outcomes
In this paper, we solve the following data summarization problem: given a multi-dimensional data set augmented with a binary attribute, how can we construct an interpretable and informative summary of the factors affecting the binary attribute in terms of the combinations of values of the dimension attributes? We refer to such summaries as explanation tables. We show the hardness of constructin...
Journal
Journal title: Computers in Biology and Medicine
Year: 2021
ISSN: ['0010-4825', '1879-0534']
DOI: https://doi.org/10.1016/j.compbiomed.2021.104393